Virtual Character


Social-Physical Interactions with Virtual Characters: Evaluating the Impact of Physicality through Encountered-Type Haptics

Godden, Eric, Groenewegen, Jacquie, Wheeler, Michael, Pan, Matthew K. X. J.

arXiv.org Artificial Intelligence

This work investigates how robot-mediated physicality influences the perception of social-physical interactions with virtual characters. ETHOS (Encountered-Type Haptics for On-demand Social interaction) is an encountered-type haptic display that integrates a torque-controlled manipulator and interchangeable props with a VR headset to enable three gestures: object handovers, fist bumps, and high fives. We conducted a user study to examine how ETHOS adds physicality to virtual character interactions and how this affects presence, realism, enjoyment, and connection metrics. Each participant experienced one interaction under three conditions: no physicality (NP), static physicality (SP), and dynamic physicality (DP). SP extended the purely virtual baseline (NP) by introducing tangible props for direct contact, while DP further incorporated motion and impact forces to emulate natural touch. Results show presence increased stepwise from NP to SP to DP. Realism, enjoyment, and connection also improved with added physicality, though differences between SP and DP were not significant. Comfort remained consistent across conditions, indicating no added psychological friction. These findings demonstrate the experiential value of ETHOS and motivate the integration of encountered-type haptics into socially meaningful VR experiences.


Chatbots encouraged our sons to kill themselves, mothers say

BBC News

Megan Garcia had no idea her teenage son Sewell, a bright and beautiful boy, had started spending hours and hours obsessively talking to an online character on the Character.ai platform. "It's like having a predator or a stranger in your home," Ms Garcia tells me in her first UK interview. "And it is much more dangerous because a lot of the times children hide it - so parents don't know." Within ten months, Sewell, 14, was dead. He had taken his own life.


React to This (RTT): A Nonverbal Turing Test for Embodied AI

Zhang, Chuxuan, Etesam, Yasaman, Lim, Angelica

arXiv.org Artificial Intelligence

We propose an approach to test embodied AI agents for interaction awareness and believability, particularly in scenarios where humans push them to their limits. In 1950, Turing [1] introduced the Imitation Game as a way to explore the question: "Can machines think?" The Total Turing Test later expanded this concept beyond purely verbal communication, incorporating perceptual and physical interaction. Building on this, we propose a new guiding question: "Can machines react?" and introduce the React to This (RTT) test for nonverbal behaviors, presenting results from an initial experiment. Since Turing's proposal, numerous attempts have been made to pass his test [2]. One of the earliest systems to highlight how surface-level language mimicry could deceive users was ELIZA [3], developed in 1965.


California Senate passes bill that aims to make AI chatbots safer

Los Angeles Times

California lawmakers on Tuesday moved one step closer to placing more guardrails around artificial intelligence-powered chatbots. The Senate passed a bill that aims to make chatbots used for companionship safer after parents raised concerns that virtual characters harmed their children's mental health. The legislation, which now heads to the California State Assembly, shows how state lawmakers are tackling safety concerns surrounding AI as tech companies release more AI-powered tools. "The country is watching again for California to lead," said Sen. Steve Padilla (D-Chula Vista), one of the lawmakers who introduced the bill, on the Senate floor.


An 'AI Scientist' Is Inventing and Running Its Own Experiments

WIRED

At first glance, a recent batch of research papers produced by a prominent artificial intelligence lab at the University of British Columbia in Vancouver might not seem that notable. Featuring incremental improvements on existing algorithms and ideas, they read like the contents of a middling AI conference or journal. But the research is, in fact, remarkable. That's because it's entirely the work of an "AI scientist" developed at the UBC lab together with researchers from the University of Oxford and a startup called Sakana AI. The project demonstrates an early step toward what might prove a revolutionary trick: letting AI learn by inventing and exploring novel ideas.


CharacterChat: Learning towards Conversational AI with Personalized Social Support

Tu, Quan, Chen, Chuanqi, Li, Jinpeng, Li, Yanran, Shang, Shuo, Zhao, Dongyan, Wang, Ran, Yan, Rui

arXiv.org Artificial Intelligence

In our modern, fast-paced, and interconnected world, the importance of mental well-being has grown into a matter of great urgency. However, traditional methods such as Emotional Support Conversations (ESC) face challenges in effectively addressing a diverse range of individual personalities. In response, we introduce the Social Support Conversation (S2Conv) framework. It comprises a series of support agents and an interpersonal matching mechanism, linking individuals with persona-compatible virtual supporters. Utilizing persona decomposition based on the MBTI (Myers-Briggs Type Indicator), we have created the MBTI-1024 Bank, a group of virtual characters with distinct profiles. Through improved role-playing prompts with behavior presets and dynamic memory, we facilitate the development of the MBTI-S2Conv dataset, which contains conversations between the characters in the MBTI-1024 Bank. Building upon these foundations, we present CharacterChat, a comprehensive S2Conv system, which includes a conversational model driven by personas and memories, along with an interpersonal matching plugin model that dispatches the optimal supporters from the MBTI-1024 Bank for individuals with specific personas. Empirical results indicate the remarkable efficacy of CharacterChat in providing personalized social support and highlight the substantial advantages derived from interpersonal matching. The source code is available in \url{https://github.com/morecry/CharacterChat}.
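The dispatching idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual plugin model (which is learned): it encodes a 4-letter MBTI type as a signed vector and picks the supporter in a toy persona bank with the highest cosine similarity to the user. All names and the scoring rule are assumptions for illustration.

```python
import numpy as np

# Each MBTI axis contributes one +/-1 component: +1 if the type's letter
# matches the first letter of the axis (E, S, T, J), -1 otherwise.
MBTI_AXES = ("EI", "SN", "TF", "JP")

def mbti_vector(mbti: str) -> np.ndarray:
    """Encode a 4-letter MBTI type as a +/-1 vector over the four axes."""
    return np.array([1.0 if mbti[i] == axis[0] else -1.0
                     for i, axis in enumerate(MBTI_AXES)])

def dispatch_supporter(user_mbti: str, bank: dict) -> str:
    """Return the supporter whose persona vector is closest (cosine) to the user's."""
    u = mbti_vector(user_mbti)
    def score(name: str) -> float:
        v = mbti_vector(bank[name])
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(bank, key=score)

# Toy persona bank standing in for the MBTI-1024 Bank.
bank = {"Ava": "INFJ", "Ben": "ESTP", "Caz": "ENTP"}
print(dispatch_supporter("INTJ", bank))  # → Ava
```

An INTJ user shares three of four axes with the INFJ supporter, so "Ava" is dispatched; the real system would replace this hand-coded similarity with the learned matching plugin.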


Controllable Motion Diffusion Model

Shi, Yi, Wang, Jingbo, Jiang, Xuekun, Dai, Bo

arXiv.org Artificial Intelligence

Generating realistic and controllable motions for virtual characters is a challenging task in computer animation, and its implications extend to games, simulations, and virtual reality. Recent studies have drawn inspiration from the success of diffusion models in image generation, demonstrating the potential for addressing this task. However, the majority of these studies have been limited to offline applications targeting sequence-level generation, which produces all steps simultaneously. To enable real-time motion synthesis with diffusion models in response to time-varying control signals, we propose the framework of the Controllable Motion Diffusion Model (COMODO). Our framework begins with an auto-regressive motion diffusion model (A-MDM), which generates motion sequences step by step. In this way, simply using the standard DDPM algorithm without any additional complexity, our framework is able to generate high-fidelity motion sequences over extended periods with different types of control signals. Then, we propose a reinforcement learning-based controller and controlling strategies on top of the A-MDM model, so that our framework can steer the motion synthesis process across multiple tasks, including target reaching, joystick-based control, goal-oriented control, and trajectory following. The proposed framework enables the real-time generation of diverse motions that react adaptively to user commands on the fly, thereby enhancing the overall user experience. In addition, it is compatible with inpainting-based editing methods and can predict much more diverse motions without additional fine-tuning of the basic motion generation models. We conduct comprehensive experiments to evaluate the effectiveness of our framework in performing various tasks and compare its performance against state-of-the-art methods.
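The auto-regressive scheme described above can be sketched as follows: each pose frame is produced by running a full standard-DDPM reverse process conditioned on the previous frame, then fed back in as the condition for the next frame. This is a minimal toy sketch, not the paper's implementation; the `denoiser` below is a hypothetical stand-in for a trained noise-prediction network, and the 3-DoF pose and schedule values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                                  # diffusion steps per frame
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, prev_frame):
    """Stand-in for a learned noise predictor eps_theta(x_t, t, prev_frame)."""
    return x_t - prev_frame             # placeholder: pulls samples toward prev_frame

def sample_frame(prev_frame):
    """Standard DDPM reverse process generating one pose frame."""
    x = rng.standard_normal(prev_frame.shape)
    for t in reversed(range(T)):
        eps = denoiser(x, t, prev_frame)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

# Roll out a short motion sequence frame by frame (auto-regressively).
pose = np.zeros(3)                      # toy 3-DoF pose
trajectory = [pose]
for _ in range(10):
    pose = sample_frame(trajectory[-1])
    trajectory.append(pose)

print(len(trajectory))  # → 11
```

Because each frame only conditions on the previous one, a controller (in COMODO, reinforcement-learning based) can inject time-varying control signals between frames, which is what makes the step-by-step formulation suitable for real-time use.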


Towards social embodied cobots: The integration of an industrial cobot with a social virtual agent

Nicora, Matteo Lavit, Beyrodt, Sebastian, Tsovaltzi, Dimitra, Nunnari, Fabrizio, Gebhard, Patrick, Malosio, Matteo

arXiv.org Artificial Intelligence

The integration of the physical capabilities of an industrial collaborative robot with a social virtual character may represent a viable solution to enhance the workers' perception of the system as an embodied social entity and increase social engagement and well-being at the workplace. An online study was set up using prerecorded video interactions in order to pilot potential advantages of different embodied configurations of the cobot-avatar system in terms of perceptions of Social Presence, cobot-avatar Unity, and Social Role of the system, and to explore the relations among these. In particular, two different configurations were explored and compared: the virtual character was displayed either on a tablet strapped onto the base of the cobot or on a large TV screen positioned at the back of the workcell. The results imply that participants showed no clear preference based on the constructs, and both configurations fulfill these basic criteria. In terms of the relations between the constructs, there were strong correlations between perceptions of Social Presence, Unity, and Social Role (Collegiality). This gives valuable insight into the role of these constructs in the perception of cobots as embodied social entities, and towards building cobots that support well-being at the workplace.


La veille de la cybersécurité

#artificialintelligence

If software is eating the world, AI isn't far behind. AI-powered text-, art- and audio-generating systems will soon make -- and already are making -- their way into the tools people use every day, from programming environments and spellcheck plugins to concept art creation platforms. The video game industry is no exception to this, and that hardly comes as a surprise. As illustrated by games like AI Dungeon, AI -- while imperfect -- can inject surprising creativity and novelty into branching narrative storytelling. Inworld AI was founded on this premise.


Inworld closes $50M Series A for its realistic NPC generator

#artificialintelligence

Inworld, a Disney-backed startup using AI to create realistic non-playable virtual characters (NPCs), has closed a $50 million Series A funding round. The startup has attracted interest for its ability to design and deploy interactive characters with more realistic interactions across the metaverse and other virtual worlds like video games. In current video games, NPCs have pre-scripted responses. AI-powered virtual characters like those Inworld is developing can offer dynamic responses to general questions about the local area or wider world. While graphics have generally become more immersive over the years, interactions have largely remained the same.